In this project, you'll use generative adversarial networks to generate new images of faces.
You'll be using two datasets in this project: MNIST and CelebA.
Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will let you see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
#data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
data_dir = '/input'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
show_n_images = 8*8
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change how many of the first examples are shown by changing show_n_images.
show_n_images = 7*7
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Since the project's main focus is on building the GANs, we'll preprocess the data for you. Both the MNIST and CelebA images are 28x28, with pixel values scaled to the range -0.5 to 0.5. The CelebA images are cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are grayscale with a single color channel, while the CelebA images have 3 color channels (RGB).
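For reference, below is a minimal sketch of the kind of per-image preprocessing that helper.py performs; the exact crop box and interpolation mode are assumptions for illustration, and the authoritative version lives in helper.get_batch.
import numpy as np
from PIL import Image
def preprocess_image(path, width=28, height=28, mode='L'):
    # Hypothetical re-implementation of the helper's preprocessing (values assumed)
    image = Image.open(path)
    if image.size != (width, height):
        # CelebA: center-crop around the face, then resize down to 28x28
        face_width, face_height = 108, 108  # assumed crop size
        j = (image.size[0] - face_width) // 2
        i = (image.size[1] - face_height) // 2
        image = image.crop([j, i, j + face_width, i + face_height])
        image = image.resize([width, height], Image.BILINEAR)
    # Scale pixel values from [0, 255] to [-0.5, 0.5]
    return np.array(image.convert(mode), dtype=np.float32) / 255 - 0.5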
You'll build the components necessary to build a GAN by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train

The following cell will check to make sure you have the correct version of TensorFlow and access to a GPU.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder using image_width, image_height, and image_channels.
- Z input placeholder using z_dim.
- Learning rate placeholder.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
"""
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
"""
images_input = tf.placeholder(tf.float32, [None, image_width, image_height, image_channels], name='input_real')
z_input = tf.placeholder(tf.float32, [None, z_dim], name='input_z')
lr = tf.placeholder(tf.float32, name='learning_rate')
return images_input, z_input, lr
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
#Adding the alpha argument to the discriminator function since this is required for Leaky ReLU
def discriminator(images, reuse=False, alpha=0.2):
"""
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:param alpha: Negative slope coefficient for the Leaky ReLU activations
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
"""
#Setting kernel size a variable to tweak it and observe the outputs
kernel_size = 5 #3,5,7
#Create the variable scope
with tf.variable_scope('discriminator', reuse=reuse):
#As per the description in the problem statement, the images will be of size 28x28
#First convolution will transform them to 64 depth, strides=1 so we don't reduce height and width
img0 = tf.layers.conv2d(images, 64, kernel_size, strides=1, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
relu0 = tf.maximum(alpha * img0, img0)
# 28x28x64
#Second convolution: 128 filters; stride 2 halves the spatial dimensions
img1 = tf.layers.conv2d(relu0, 128, kernel_size, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
bn_img1 = tf.layers.batch_normalization(img1, training=True)
relu1 = tf.maximum(alpha * bn_img1, bn_img1)
# 14x14x128
img2 = tf.layers.conv2d(relu1, 256, kernel_size, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
bn_img2 = tf.layers.batch_normalization(img2, training=True)
relu2 = tf.maximum(alpha * bn_img2, bn_img2)
# 7x7x256
img3 = tf.layers.conv2d(relu2, 512, kernel_size, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
bn_img3 = tf.layers.batch_normalization(img3, training=True)
relu3 = tf.maximum(alpha * bn_img3, bn_img3)
# Naively halving 7x7 would give 3.5x3.5, but with padding='same' tf.layers.conv2d computes:
# out_height = ceil(float(in_height) / float(strides[1])) --> ceil(7/2) --> 4
# out_width = ceil(float(in_width) / float(strides[2])) --> ceil(7/2) --> 4
# therefore, the output is 4x4x512
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*512))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
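If you want to double-check the shape arithmetic in the comments above, here is a quick standalone sketch (assuming TF 1.x, as used throughout the notebook) showing that padding='same' yields out = ceil(in / stride) at each layer:
import tensorflow as tf
with tf.Graph().as_default():
    x = tf.zeros([1, 28, 28, 3])
    for filters, strides in [(64, 1), (128, 2), (256, 2), (512, 2)]:
        x = tf.layers.conv2d(x, filters, 5, strides=strides, padding='same')
        print(x.shape)  # 28x28x64 -> 14x14x128 -> 7x7x256 -> 4x4x512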
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
#Adding the alpha argument to the generator function since this is required for Leaky ReLU
def generator(z, out_channel_dim, is_train=True, alpha=0.2):
"""
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:param alpha: Negative slope coefficient for the Leaky ReLU activations
:return: The tensor output of the generator
"""
#Setting kernel size a variable to tweak it and observe the outputs
kernel_size = 5 #3,5,7
#Reuse should be the opposite of training: when is_train = False, reuse = True, and vice versa
reuse = not is_train
#Create the variable scope
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
#The first transformation here will be to 4x4x512
z1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
z1 = tf.reshape(z1, (-1, 4, 4, 512))
z1 = tf.layers.batch_normalization(z1, training=is_train)
z1 = tf.maximum(alpha * z1, z1)
# The shape of z1 is 4x4x512
#To move from 4x4x512 to 7x7x256 we will need to use kernel_size = 4 and stride = 1 with padding valid
#To get to these values I used the formulas available in the documentation and tested it out
z2 = tf.layers.conv2d_transpose(z1, 256, 4, strides=1, padding='valid', kernel_initializer=tf.contrib.layers.xavier_initializer())
z2 = tf.layers.batch_normalization(z2, training=is_train)
z2 = tf.maximum(alpha * z2, z2)
#The shape of the tensor is 7x7x256
z3 = tf.layers.conv2d_transpose(z2, 128, kernel_size, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
z3 = tf.layers.batch_normalization(z3, training=is_train)
z3 = tf.maximum(alpha * z3, z3)
#With padding='same', only the stride changes the spatial size, so the shape is 14x14x128
#Using stride=1 next so the spatial size is preserved
z4 = tf.layers.conv2d_transpose(z3, 64, kernel_size, strides=1, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
z4 = tf.layers.batch_normalization(z4, training=is_train)
z4 = tf.maximum(alpha * z4, z4)
#With padding='same' and stride=1, the spatial size is preserved, so the shape is 14x14x64
# Output layer
logits = tf.layers.conv2d_transpose(z4, out_channel_dim, kernel_size, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
# 28x28xoutput_dim now
out = tf.tanh(logits)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
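As a sanity check on the transpose-convolution shape math used above (padding='valid' gives (in - 1) * stride + kernel, while padding='same' gives in * stride), this standalone sketch, again assuming TF 1.x, reproduces the 4 -> 7 -> 14 -> 14 -> 28 progression for an RGB output:
import tensorflow as tf
with tf.Graph().as_default():
    x = tf.zeros([1, 4, 4, 512])
    x = tf.layers.conv2d_transpose(x, 256, 4, strides=1, padding='valid')
    print(x.shape)  # (1, 7, 7, 256): (4 - 1) * 1 + 4 = 7
    x = tf.layers.conv2d_transpose(x, 128, 5, strides=2, padding='same')
    print(x.shape)  # (1, 14, 14, 128): 7 * 2 = 14
    x = tf.layers.conv2d_transpose(x, 64, 5, strides=1, padding='same')
    print(x.shape)  # (1, 14, 14, 64): stride 1 preserves spatial size
    x = tf.layers.conv2d_transpose(x, 3, 5, strides=2, padding='same')
    print(x.shape)  # (1, 28, 28, 3): out_channel_dim = 3 for RGB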
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)

def model_loss(input_real, input_z, out_channel_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
#Label smoothing: score real images against 1 - smooth = 0.9 so the discriminator doesn't become overconfident
smooth = 0.1
g_out = generator(input_z, out_channel_dim, alpha=alpha)
#Next we will get the outputs and logits of the discriminator using the real image first and then the
#image generated by the generator above
d_out_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_out_fake, d_logits_fake = discriminator(g_out, reuse=True, alpha=alpha)
#Compute the loss both for the real outputs and the generated outputs.
#Note: real images are scored against smoothed positive labels (tf.ones_like * (1 - smooth)),
#while generated images are scored against negative labels (tf.zeros_like)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_out_real)*(1-smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_out_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_out_fake)*(1-smooth)))
#Combine the losses for the discriminator:
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
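# The control_dependencies wrapper below makes the optimizers run the batch-normalization
# update ops (UPDATE_OPS), so the moving mean/variance statistics are refreshed every step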
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
"""
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
"""
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode, alpha=0.2):
"""
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
"""
#Stats for training
print_every = 20
show_every = 200
n_images_sample = 49
#First get the inputs: decomposing data_shape into its different elements
input_real, input_z, lr = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
#Get the discriminator and generator loss models
d_loss, g_loss = model_loss(input_real, input_z, data_shape[3], alpha=alpha)
#Get the discriminator and generator optimizers
d_opt, g_opt = model_opt(d_loss, g_loss, lr, beta1)
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
#Rescale batch_images to (-1,1) since they are returned in (-0.5,0.5) and tanh outputs (-1,1)
batch_images = batch_images * 2
# Run optimizers, as recommended in the notebook, running the generator optimization twice
_ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})
_ = sess.run(g_opt, feed_dict={input_z: batch_z, input_real: batch_images, lr:learning_rate})
_ = sess.run(g_opt, feed_dict={input_z: batch_z, input_real: batch_images, lr:learning_rate})
if steps % print_every == 0:
# Every print_every batches, get the losses and print them out
train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(epoch_i+1, epoch_count),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
if steps % show_every == 0:
show_generator_output(sess, n_images_sample, input_z, data_shape[3], data_image_mode)
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
#Tested batch_size of 32, 64, 128 and 256. 64 yielded images with good quality (no objective assessment,
# just me looking at the samples) and trained faster than 32
#Tried with and without label smoothing (i.e. smooth = 0 and smooth = 0.1); the results seemed to converge
#faster and with higher quality using label smoothing
batch_size = 64
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
alpha = 0.2
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode, alpha=alpha)
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
#First attempt at hyperparameters: same as MNIST; 3 layers in the generator & discriminator networks
#batch_size = 64
#z_dim = 100
#learning_rate = 0.0002
#beta1 = 0.5
#alpha = 0.2
#The results were faces (I could tell they were people!) but they were somewhat blurry. So, I kept these
#hyperparameters the same but added another convolutional layer to both the discriminator and generator
#to see if adding more capacity helped
#Second attempt at hyperparameters: same as MNIST; 4 layers in the generator & discriminator networks.
#batch_size = 64
#z_dim = 100
#learning_rate = 0.0002
#beta1 = 0.5
#alpha = 0.2
#With these parameters the images seemed a bit richer (more features in people's faces) but progress was
#very slow, so I decided to increase the learning_rate by an order of magnitude to see if it helped
#Third attempt at hyperparameters: same as before, but with a much larger learning_rate
#batch_size = 64
#z_dim = 100
#learning_rate = 0.002
#beta1 = 0.5
#alpha = 0.2
#Fourth attempt at hyperparameters: the learning_rate was too high and caused the loss to jump around too
#much, so I increased it only slightly above the original value. Decreased the batch_size to 48 (halfway between 32 and 64)
#batch_size = 48
#z_dim = 100
#learning_rate = 0.0005
#beta1 = 0.5
#alpha = 0.2
#Fifth attempt at hyperparameters: increased batch size to 56 since there was no noticeable improvement.
#Reduced z_dim to 80 and reduced the learning rate further. Tweaked the exponential decay rate (beta1) to be smaller
#batch_size = 56
#z_dim = 80
#learning_rate = 0.0004
#beta1 = 0.4
#alpha = 0.2
#6th attempt at hyperparameters: increased batch size back to 64 since there was no noticeable improvement.
#Increased z_dim back to 100 and kept the learning rate at 0.0004. Tweaked the exponential decay rate (beta1) to be larger
#batch_size = 64
#z_dim = 100
#learning_rate = 0.0004
#beta1 = 0.6
#alpha = 0.2
#Produced very noisy images, so reducing the learning rate and batch size in the next iteration
#7th iteration: added Xavier initialization, reduced the learning_rate to 0.0001 and reduced the batch_size
batch_size = 32
z_dim = 100
learning_rate = 0.0001
beta1 = 0.5
alpha = 0.2
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode, alpha=alpha)
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and export it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.